Multi-class ensemble classification remains a popular focus of investigation within the research community. The popularization of cloud services has sped up its adoption, owing to the ease of deploying large-scale machine-learning models. It has also drawn the attention of the industrial sector because of its ability to identify common problems in production. However, building an ensemble classifier poses several challenges, namely a proper selection and effective training of the pool of classifiers, the definition of a suitable architecture for multi-class classification, and uncertainty quantification of the ensemble classifier. The robustness and effectiveness of the ensemble classifier lie in the selection of the pool of classifiers, as well as in the learning process. Hence, the selection and training procedure of the pool of classifiers play a crucial role. An (ensemble) classifier learns to detect the classes that were used during supervised training. However, when presented with data from unknown conditions, the trained classifier will still tend to predict one of the classes learned during training. To this end, the uncertainty of the individual classifiers and of the ensemble could be used to assess this learning limitation. We present a novel approach to novelty detection using ensemble classification and evidence theory. A pool-selection strategy is presented to build a solid ensemble classifier. We present an architecture for multi-class ensemble classification, together with an approach to quantify the uncertainty of the individual classifiers and of the ensemble classifier, and we use this uncertainty for anomaly detection. Finally, we run experiments on the Tennessee Eastman benchmark to test the ensemble classifier's prediction and anomaly-detection capabilities.
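The evidence-theoretic fusion this abstract refers to can be illustrated with Dempster's rule of combination, where the mass assigned to the full frame of discernment acts as each classifier's uncertainty. This is a minimal sketch; the two-class frame, the mass values, and the `combine` helper are illustrative assumptions, not the paper's actual formulation:

```python
# Dempster's rule of combination for two classifiers' basic mass assignments.
# Hypotheses are frozensets over the frame {"A", "B"}; mass on the full
# frame ("theta") represents each classifier's uncertainty.

def combine(m1, m2):
    """Combine two mass functions with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:  # compatible evidence goes to the intersection
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:      # contradictory evidence accumulates as conflict
                conflict += v1 * v2
    k = 1.0 - conflict  # normalize by the non-conflicting mass
    return {h: v / k for h, v in combined.items()}, conflict

theta = frozenset({"A", "B"})
m1 = {frozenset({"A"}): 0.6, frozenset({"B"}): 0.1, theta: 0.3}
m2 = {frozenset({"A"}): 0.5, frozenset({"B"}): 0.2, theta: 0.3}
fused, conflict = combine(m1, m2)
```

A low fused mass on `theta` and low conflict indicate that the classifiers agree confidently; high conflict or high residual mass on `theta` can flag inputs outside the learned classes.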
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
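Patch-based training, the most commonly reported workaround for oversized samples, can be sketched as a sliding-window patch extractor over a large image. The image size, patch size, and stride below are arbitrary illustrative choices, not values from the survey:

```python
import numpy as np

def extract_patches(image, patch_shape, stride):
    """Slide a window over a 2D image and stack the resulting patches,
    as done in patch-based training when full images do not fit in memory."""
    h, w = image.shape
    ph, pw = patch_shape
    patches = []
    for y in range(0, h - ph + 1, stride):
        for x in range(0, w - pw + 1, stride):
            patches.append(image[y:y + ph, x:x + pw])
    return np.stack(patches)

img = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
p = extract_patches(img, (16, 16), 16)  # 4 x 4 non-overlapping patches
```

A stride smaller than the patch size yields overlapping patches, a common choice to augment training data and smooth predictions at patch borders.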
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to modeling biases within the data instead of focusing on actually useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from making unfair decisions, the AI community has concentrated efforts on correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Process mining is a set of techniques used by organizations to understand and improve their operational processes. The first step in designing any process re-design procedure is to find process improvement opportunities. In existing work, it is usually assumed that undesirable outcomes occur in a set of problematic process instances that are known beforehand or easy to detect. Process enhancement therefore involves finding root causes of, and treatments for, the problems in these process instances. For example, the set of problematic instances is taken to be those with outlier values, or with values smaller or greater than a given threshold in one of the process features. However, with such approaches, many process enhancement opportunities that are not captured by these problematic process instances are missed. To overcome this, we formulate finding process enhancement areas as a context-sensitive anomaly/outlier detection problem. We define a process enhancement area as a set of situations (process instances or prefixes of process instances) where the process performance is surprising. We aim to characterize those situations whose performance or outcome differs significantly from the performance or outcome of similar situations. To assess the effectiveness and relevance of the proposed approach, we have implemented and evaluated it on several real-life event logs.
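The contrast between a global outlier rule and a context-sensitive one can be sketched on synthetic case durations. The two contexts, the z-score rule, and the thresholds below are illustrative assumptions for exposition, not the authors' exact method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Case durations (hours) in two contexts, e.g. two product types.
fast = rng.normal(5.0, 0.5, size=200)    # context A: usually ~5h
slow = rng.normal(20.0, 2.0, size=200)   # context B: usually ~20h
suspicious = np.array([9.0])             # a context-A case taking 9h

durations = np.concatenate([fast, slow, suspicious])
contexts = np.array(["A"] * 200 + ["B"] * 200 + ["A"])

# Global outlier rule: flag anything beyond 2 std devs of the overall mean.
glob = np.abs(durations - durations.mean()) > 2 * durations.std()

# Context-sensitive rule: z-score computed within each context separately.
ctx_flags = np.zeros_like(glob)
for c in ("A", "B"):
    sel = contexts == c
    z = np.abs(durations[sel] - durations[sel].mean()) / durations[sel].std()
    ctx_flags[sel] = z > 2
```

The 9-hour case sits comfortably inside the global distribution (which mixes both contexts), yet it is highly surprising among similar context-A cases, which is exactly the kind of enhancement opportunity a global threshold misses.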
Artificial intelligence (AI) to assist antimicrobial prescribing raises significant moral questions. Utilizing ethical frameworks alongside AI-driven systems, while considering their specific complexities, can support moral decision-making to tackle antimicrobial resistance.
Despite extensive theoretical work on biologically plausible learning rules, clear evidence about whether and how such rules are implemented in the brain has been difficult to obtain. We consider biologically plausible supervised- and reinforcement-learning rules and ask whether changes in network activity during learning can be used to determine which learning rule is in use. Supervised learning requires a credit-assignment model that estimates the mapping from neural activity to behavior, and, in a biological organism, this model will inevitably be an imperfect approximation of the ideal mapping, leading to a bias in the direction of the weight updates relative to the true gradient. Reinforcement learning, on the other hand, requires no credit-assignment model and tends to make weight updates that follow the true gradient direction. We derive a metric to distinguish between learning rules by observing changes in network activity during learning, given that the mapping from brain to behavior is known by the experimenter. Because brain-machine interface (BMI) experiments allow perfect knowledge of this mapping, we focus on modeling a cursor-control BMI task using recurrent neural networks, showing that the learning rules can be distinguished in simulated experiments using only observations that a neuroscience experimenter would plausibly have access to.
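The core intuition, that an imperfect credit-assignment model biases supervised updates away from the true gradient while reinforcement-style updates are noisy but unbiased, can be sketched numerically. The dimensionality, noise scales, and cosine-alignment metric below are illustrative assumptions, not the paper's derived metric:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                                 # illustrative weight dimensionality
grad = rng.normal(size=n)              # stand-in for the true gradient direction

# Supervised learning with an imperfect credit-assignment model:
# every update follows a *systematically biased* gradient estimate.
biased_grad = grad + 0.8 * rng.normal(size=n)
supervised_updates = np.stack(
    [biased_grad + 0.1 * rng.normal(size=n) for _ in range(200)]
)

# Reinforcement learning: unbiased but much noisier updates along the true gradient.
rl_updates = np.stack([grad + 1.5 * rng.normal(size=n) for _ in range(200)])

def mean_update_alignment(updates, grad):
    """Cosine similarity between the average update and the true gradient.
    Averaging over trials washes out zero-mean noise but not a systematic bias."""
    mean_u = updates.mean(axis=0)
    return mean_u @ grad / (np.linalg.norm(mean_u) * np.linalg.norm(grad))

a_sup = mean_update_alignment(supervised_updates, grad)
a_rl = mean_update_alignment(rl_updates, grad)
```

Because the RL noise averages out while the supervised bias does not, the averaged RL updates align more closely with the true gradient, which is the kind of observable signature such a metric exploits.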
Capturing general deforming scenes is crucial for many computer graphics and vision applications, and it is especially challenging when only monocular RGB video is available. Competing methods assume dense point tracks, 3D templates, or large-scale training datasets, or only capture small-scale deformations. In contrast to these, our method, UB4D, makes none of these assumptions while outperforming the previous state of the art in challenging scenarios. Our technique includes two new, in the context of non-rigid 3D reconstruction, components: 1) a coordinate-based and implicit neural representation for non-rigid scenes, which enables unbiased reconstruction of dynamic scenes, and 2) a novel dynamic scene flow loss, which enables the reconstruction of larger deformations. Results on our new dataset, which will be made publicly available, demonstrate a clear improvement over the state of the art in terms of surface reconstruction accuracy and robustness to large deformations. Visit the project page at https://4dqv.mpi-inf.mpg.de/ub4d/.
Recent work in machine learning and cognitive science suggests that understanding causal information is essential to the development of intelligence. An extensive literature in cognitive science using the ``blicket detector'' environment shows that children are adept at many kinds of causal inference and learning. We propose adapting this environment for machine-learning agents. One of the key challenges for current machine-learning algorithms is modeling and understanding causal overhypotheses: transferable abstract hypotheses about sets of causal relationships. In contrast, even young children spontaneously learn and use causal overhypotheses. In this work, we present a new benchmark, a flexible environment that allows the evaluation of existing techniques under variable causal overhypotheses, and demonstrate that many existing state-of-the-art methods have difficulty generalizing in this environment. The code and resources for this benchmark are available at https://github.com/cannylab/casual_overhypothess.
Over the past three decades, the planning community has explored countless methods of data-driven model acquisition. These range in sophistication (e.g., from simple set operations to full-blown reformulations), methodology (e.g., logic-based vs. planner-based), and assumptions (e.g., fully vs. partially observable). With no fewer than 43 publications in this space, understanding which method could or should be applied in a new setting can be overwhelming. We present a holistic characterization of the action model acquisition space and further introduce a unifying framework for automated action model acquisition. We have re-implemented some of the landmark approaches in the area, and our characterization of all the techniques offers deep insight into the remaining research opportunities; namely, the settings that no existing technique is able to address.